
    Self Super-Resolution for Magnetic Resonance Images using Deep Networks

    High resolution magnetic resonance (MR) imaging (MRI) is desirable in many clinical applications; however, there is a trade-off between resolution, speed of acquisition, and noise. It is common for MR images to have worse through-plane resolution (slice thickness) than in-plane resolution. In such images, high frequency information in the through-plane direction is not acquired and cannot be recovered through interpolation. To address this issue, super-resolution methods have been developed to enhance spatial resolution. Because super-resolution is an ill-posed problem, state-of-the-art methods rely on external/training atlases to learn the transform from low resolution (LR) images to high resolution (HR) images. For several reasons, such HR atlas images are often not available for MRI sequences. This paper presents a self super-resolution (SSR) algorithm, which does not use any external atlas images, yet can still estimate an HR image relying only on the acquired LR image. We use a blurred version of the input image to create training data for a state-of-the-art super-resolution deep network. The trained network is applied to the original input image to estimate the HR image. Our SSR result shows a significant improvement in through-plane resolution compared to competing SSR methods. Comment: Accepted by IEEE International Symposium on Biomedical Imaging (ISBI) 201
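
    The core trick can be illustrated with a short, hedged sketch: further degrade the acquired volume along one high-resolution in-plane axis so that it mimics the through-plane blur, then pair the degraded in-plane slices with the original ones as (LR, HR) training data. The axis convention, the FWHM-based Gaussian blur, and the function names below are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of self super-resolution training-pair creation.
# Assumes a NumPy volume `lr_vol` with axes (x, y, z), where z is the
# low-resolution through-plane direction and `k` is the ratio of slice
# thickness to in-plane voxel size (an assumption, not the paper's API).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def make_training_pairs(lr_vol, k, blur_axis=0):
    # Approximate the through-plane blur with a Gaussian whose FWHM is k voxels.
    sigma = k / 2.355  # FWHM = 2*sqrt(2*ln 2)*sigma ~= 2.355*sigma
    blurred = gaussian_filter1d(lr_vol, sigma=sigma, axis=blur_axis)
    # In-plane (x-y) slices contain the artificially blurred x direction,
    # so each (blurred slice, original slice) pair serves as (LR, HR) data
    # for an off-the-shelf super-resolution network.
    inputs = [blurred[:, :, j] for j in range(lr_vol.shape[2])]
    targets = [lr_vol[:, :, j] for j in range(lr_vol.shape[2])]
    return inputs, targets
```

    After training on such pairs, the network would be applied to slices that contain the through-plane direction of the original volume to estimate the HR image.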

    On Finite Difference Jacobian Computation in Deformable Image Registration

    Producing spatial transformations that are diffeomorphic has been a central problem in deformable image registration. Because a diffeomorphic transformation should have a positive Jacobian determinant |J| everywhere, the number of voxels with |J| < 0 has been used to test for diffeomorphism and to measure the irregularity of the transformation. For digital transformations, |J| is commonly approximated using central differences, but this strategy can yield positive |J| values for transformations that are clearly not diffeomorphic, even at the voxel resolution level. To show this, we first investigate the geometric meaning of different finite difference approximations of |J|. We show that to determine diffeomorphism for digital images, any single finite difference approximation of |J| is insufficient. We show that for a 2D transformation, four unique finite difference approximations of |J| must be positive to ensure that the entire domain is invertible and free of folding at the pixel level, and that in 3D, ten unique finite difference approximations of |J| must be positive. Our proposed digital diffeomorphism criteria resolve several errors inherent in the central difference approximation of |J| and accurately detect non-diffeomorphic digital transformations.
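
    To make the distinction concrete, here is a hedged NumPy sketch contrasting the central-difference Jacobian determinant with one of the forward/backward-difference variants for a 2D digital transformation; the one-sided variants are of the kind the paper argues must all be checked. The array layout and names are illustrative assumptions, not the paper's code.

```python
# `phi` has shape (2, H, W) and stores the transformed coordinates
# (phi[0] = row coordinate, phi[1] = column coordinate) at each pixel.
import numpy as np

def jacobian_det_central(phi):
    """Central-difference |J| at interior pixels."""
    d_row = (phi[:, 2:, 1:-1] - phi[:, :-2, 1:-1]) / 2.0
    d_col = (phi[:, 1:-1, 2:] - phi[:, 1:-1, :-2]) / 2.0
    return d_row[0] * d_col[1] - d_row[1] * d_col[0]

def jacobian_det_forward(phi):
    """One forward-difference approximation of |J| (one of the four
    forward/backward combinations in 2D)."""
    d_row = phi[:, 1:, :-1] - phi[:, :-1, :-1]
    d_col = phi[:, :-1, 1:] - phi[:, :-1, :-1]
    return d_row[0] * d_col[1] - d_row[1] * d_col[0]
```

    A checkerboard-style folding pattern can keep every central-difference determinant positive while a one-sided variant goes negative, which is the failure mode that motivates checking the combined criteria.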

    Coordinate Translator for Learning Deformable Medical Image Registration

    The majority of deep learning (DL) based deformable image registration methods use convolutional neural networks (CNNs) to estimate displacement fields from pairs of moving and fixed images. This, however, requires the convolutional kernels in the CNN not only to extract intensity features from the inputs but also to understand image coordinate systems. We argue that the latter task is challenging for traditional CNNs, limiting their performance in registration tasks. To tackle this problem, we first introduce Coordinate Translator, a differentiable module that identifies matched features between the fixed and moving images and outputs their coordinate correspondences without the need for training. It unloads the burden of understanding image coordinate systems from CNNs, allowing them to focus on feature extraction. We then propose a novel deformable registration network, im2grid, that uses multiple Coordinate Translators with the hierarchical features extracted by a CNN encoder and outputs a deformation field in a coarse-to-fine fashion. We compared im2grid with state-of-the-art DL and non-DL methods for unsupervised 3D magnetic resonance image registration. Our experiments show that im2grid outperforms these methods both qualitatively and quantitatively.
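
    The matching step can be pictured as a cross-attention over spatial locations: fixed-image features attend to moving-image features, and the attention weights form a soft correspondence over moving-image coordinates. The sketch below is an illustrative simplification under assumed shapes, not the authors' im2grid code.

```python
# Illustrative cross-attention-style coordinate matching; shapes and names are assumptions.
import torch
import torch.nn.functional as F

def coordinate_translator(fixed_feat, moving_feat, moving_coords):
    """
    fixed_feat:    (B, C, N) features at N fixed-image locations
    moving_feat:   (B, C, M) features at M moving-image locations
    moving_coords: (B, M, D) spatial coordinates of the moving locations
    returns:       (B, N, D) soft corresponding coordinates for each fixed location
    """
    # Similarity between every fixed location and every moving location.
    attn = torch.einsum('bcn,bcm->bnm', fixed_feat, moving_feat)
    attn = F.softmax(attn / fixed_feat.shape[1] ** 0.5, dim=-1)
    # Soft correspondence: attention-weighted average of moving coordinates.
    return torch.bmm(attn, moving_coords)
```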

    Optimal operating MR contrast for brain ventricle parcellation

    Advances in MR harmonization have made it possible to synthesize MR images with different contrasts while preserving the underlying anatomy. In this paper, we use image harmonization to explore the impact of different T1-weighted MR contrasts on VParNet, a state-of-the-art ventricle parcellation algorithm. We identify an optimal operating contrast (OOC) for ventricle parcellation and show that the performance of a pretrained VParNet can be boosted by adjusting the contrast of the input image to the OOC.
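
    Conceptually, the OOC search can be written as a loop over harmonized candidate contrasts, scoring the pretrained parcellation on each. The sketch below uses hypothetical placeholder callables (`harmonize`, `vparnet`, `dice`); it is not the paper's pipeline.

```python
# Hedged sketch of selecting an optimal operating contrast (OOC).
# `harmonize`, `vparnet`, and `dice` are hypothetical stand-ins, not the paper's API.
def find_ooc(image, candidate_contrasts, reference_labels, harmonize, vparnet, dice):
    scores = {}
    for c in candidate_contrasts:
        synthesized = harmonize(image, target_contrast=c)  # same anatomy, new T1-w contrast
        parcellation = vparnet(synthesized)                 # pretrained network, no re-training
        scores[c] = dice(parcellation, reference_labels)
    return max(scores, key=scores.get)                      # contrast with the best score = OOC
```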

    Intensity Inhomogeneity Correction of SD-OCT Data Using Macular Flatspace

    Images of the retina acquired using optical coherence tomography (OCT) often suffer from intensity inhomogeneity problems that degrade both the quality of the images and the performance of automated algorithms used to measure structural changes. This intensity variation has many causes, including off-axis acquisition, signal attenuation, multi-frame averaging, and vignetting, making it difficult to correct the data in a fundamental way. This paper presents an inhomogeneity correction method that reduces the variability of intensities within each retinal layer. In particular, the N3 algorithm, which is popular in neuroimage analysis, is adapted to work for OCT data. N3 works by sharpening the intensity histogram, which reduces the variation of intensities within different classes. To apply it here, the data are first converted to a standardized space called macular flat space (MFS). MFS allows the intensities within each layer to be more easily normalized by removing the natural curvature of the retina. N3 is then run on the MFS data using a modified smoothing model, which improves the efficiency of the original algorithm. We show that our method corrects gain fields on synthetic OCT data more accurately than running N3 on non-flattened data. It also reduces the overall variability of the intensities within each layer, without sacrificing contrast between layers, and improves the performance of registration between OCT images.
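
    The flattening step that defines MFS can be sketched as shifting each A-scan so that a previously segmented retinal boundary lands on a single row, removing the retinal curvature before the N3-style histogram sharpening is applied. The boundary input and the wrap-around shift below are illustrative simplifications, not the paper's exact construction.

```python
# Illustrative flattening of a single B-scan; `boundary` comes from an earlier
# layer segmentation step and gives the row index of a chosen surface per A-scan.
import numpy as np

def flatten_bscan(bscan, boundary, target_row=None):
    """bscan: (depth, width) image; boundary: (width,) boundary row indices."""
    if target_row is None:
        target_row = int(round(float(boundary.mean())))
    flat = np.zeros_like(bscan)
    for col in range(bscan.shape[1]):
        shift = target_row - int(round(float(boundary[col])))
        # np.roll wraps around; a real implementation would pad or crop instead.
        flat[:, col] = np.roll(bscan[:, col], shift)
    return flat
```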

    A Survey on Deep Learning in Medical Image Registration: New Technologies, Uncertainty, Evaluation Metrics, and Beyond

    Over the past decade, deep learning technologies have greatly advanced the field of medical image registration. The initial developments, such as ResNet-based and U-Net-based networks, laid the groundwork for deep learning-driven image registration. Subsequent progress has been made in various aspects of deep learning-based registration, including similarity measures, deformation regularizations, and uncertainty estimation. These advancements have not only enriched the field of deformable image registration but have also facilitated its application in a wide range of tasks, including atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D registration. In this paper, we present a comprehensive overview of the most recent advancements in deep learning-based image registration. We begin with a concise introduction to the core concepts of deep learning-based image registration. Then, we delve into innovative network architectures, loss functions specific to registration, and methods for estimating registration uncertainty. Additionally, this paper explores appropriate evaluation metrics for assessing the performance of deep learning models in registration tasks. Finally, we highlight the practical applications of these novel techniques in medical imaging and discuss the future prospects of deep learning-based image registration.
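
    As a concrete anchor for the similarity measures and deformation regularizations mentioned above, the sketch below shows the canonical unsupervised registration loss that many surveyed methods build on: an intensity similarity term plus a smoothness penalty on the displacement field. The MSE similarity and the weight `lam` are assumptions chosen for brevity, not a specific method from the survey.

```python
# Canonical unsupervised registration loss: similarity + smoothness.
import torch
import torch.nn.functional as F

def registration_loss(warped_moving, fixed, displacement, lam=0.01):
    """displacement: (B, D, H, W) or (B, D, H, W, S) dense displacement field."""
    similarity = F.mse_loss(warped_moving, fixed)  # MSE stands in for richer measures (e.g., NCC, MI)
    # First-order finite-difference (diffusion) regularizer on the spatial gradients.
    grads = [torch.diff(displacement, dim=d) for d in range(2, displacement.dim())]
    smoothness = sum(g.pow(2).mean() for g in grads)
    return similarity + lam * smoothness
```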